-
AI-enabled decision-support systems aim to help medical providers rapidly make decisions with limited information during medical emergencies. A critical challenge in developing these systems is supporting providers in interpreting the system output to make optimal treatment decisions. In this study, we designed and evaluated an AI-enabled decision-support system to aid providers in treating patients with traumatic injuries. We first conducted user research with physicians to identify and design information types and AI outputs for a decision-support display. We then conducted an online experiment with 35 medical providers from six health systems to evaluate two human-AI interaction strategies: (1) AI information synthesis and (2) AI information and recommendations. We found that providers were more likely to make correct decisions when AI information and recommendations were provided compared to receiving no AI support. We also identified two socio-technical barriers to providing AI recommendations during time-critical medical events: (1) an accuracy-time trade-off in providing recommendations and (2) polarizing perceptions of recommendations between providers. We discuss three implications for developing AI-enabled decision support used in time-critical events, contributing to the limited research on human-AI interaction in this context.
Free, publicly-accessible full text available October 18, 2026.
-
Almost half of the preventable deaths in emergency care can be associated with a medical delay. Understanding how clinicians experience delays can lead to improved alert designs to increase delay awareness and mitigation. In this paper, we present the findings from an iterative user-centered design process involving 48 clinicians to develop a prototype alert system for supporting delay awareness in complex medical teamwork such as trauma resuscitation. We used semi-structured interviews and card-sorting workshops to identify the most common delays and elicit design requirements for the prototype alert system. We then conducted a survey to refine the alert designs, followed by near-live, video-guided simulations to investigate clinicians' reactions to the alerts. We contribute to CSCW by designing a prototype alert system to support delay awareness in time-critical, complex teamwork and identifying four mechanisms through which teams mitigate delays.
-
In clinical settings, most automatic recognition systems use visual or sensory data to recognize activities. These systems cannot recognize activities that rely on verbal assessment, lack visual cues, or do not use medical devices. We examined speech-based activity and activity-stage recognition in a clinical domain, making the following contributions. (1) We collected a high-quality dataset representing common activities and activity stages during actual trauma resuscitation events: the initial evaluation and treatment of critically injured patients. (2) We introduced a novel multimodal network based on the audio signal and a set of keywords that does not require a high-performing automatic speech recognition (ASR) engine. (3) We designed novel contextual modules to capture dynamic dependencies in team conversations about activities and stages during a complex workflow. (4) We introduced a data augmentation method, which simulates team communication by combining selected utterances and their audio clips, and showed that this method contributed to performance improvement in our data-limited scenario. In offline experiments, our proposed context-aware multimodal model achieved F1-scores of 73.2±0.8% and 78.1±1.1% for activity and activity-stage recognition, respectively. In online experiments, the performance declined about 10% for both recognition types when using utterance-level segmentation of the ASR output, and about 15% when we omitted the utterance-level segmentation. Our experiments showed the feasibility of speech-based activity and activity-stage recognition during dynamic clinical events.
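The data augmentation method described in this abstract (simulating team communication by combining selected utterances and their audio clips) can be sketched in broad strokes. The snippet below is a minimal illustration, not the authors' implementation: the function name `augment_conversation`, the `(text, audio)` pair representation, and the random-sampling strategy are all assumptions for the sketch.

```python
import random

def augment_conversation(utterances, k=3, seed=None):
    """Build one synthetic team conversation by sampling k recorded
    utterances and concatenating their transcripts and audio clips.

    `utterances` is a list of (transcript, audio_samples) pairs, where
    audio_samples stands in for a waveform (here, a list of floats).
    Returns the combined transcript and the concatenated audio.
    """
    rng = random.Random(seed)
    chosen = rng.sample(utterances, k)
    text = " ".join(t for t, _ in chosen)
    audio = [s for _, clip in chosen for s in clip]
    return text, audio

# Example: three short utterances combined into one synthetic exchange.
data = [
    ("starting the airway exam", [0.1, 0.2]),
    ("pulse is strong", [0.3]),
    ("IV access obtained", [0.4, 0.5]),
]
text, audio = augment_conversation(data, k=2, seed=7)
```

In a data-limited setting, repeating this sampling with different seeds yields many plausible utterance sequences from a small pool of real recordings.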
-
Designing computerized approaches to support complex teamwork requires an understanding of how activity-related information is relayed among team members. In this paper, we focus on verbal communication and describe a speech-based model that we developed for tracking activity progression during time-critical teamwork. We situated our study in the emergency medical domain of trauma resuscitation and transcribed speech from 104 audio recordings of actual resuscitations. Using the transcripts, we first studied the nature of speech during 34 clinically relevant activities. From this analysis, we identified 11 communicative events across three different stages of activity performance: before, during, and after. For each activity, we created a sequential ordering of the communicative events using the concept of narrative schemas. The final speech-based model emerged by extracting and aggregating generalized aspects of the 34 schemas. We evaluated the model performance by using 17 new transcripts and found that the model reliably recognized an activity stage in 98% of activity-related conversation instances. We conclude by discussing these results, their implications for designing computerized approaches that support complex teamwork, and their generalizability to other safety-critical domains.
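The core idea of mapping utterances to activity stages (before, during, after) can be illustrated with a toy sketch. The cue phrases and the simple substring matching below are invented for illustration only; the paper's actual model aggregates narrative schemas over 11 communicative events, which this sketch does not reproduce.

```python
# Hypothetical cue phrases per activity stage; the real schemas are
# derived from transcripts of 104 resuscitations, not hand-written lists.
STAGE_CUES = {
    "before": ["let's get", "prepare", "we need"],
    "during": ["holding", "in progress", "giving"],
    "after": ["done", "complete", "in place"],
}

def recognize_stage(utterance):
    """Return the first stage whose cue phrase appears in the
    utterance, or None when no cue matches."""
    text = utterance.lower()
    for stage, cues in STAGE_CUES.items():
        if any(cue in text for cue in cues):
            return stage
    return None
```

For example, "The chest tube is in place" would map to the "after" stage, while an utterance with no cue returns None and would be treated as non-activity talk.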